16 research outputs found

    Bidirectional-Convolutional LSTM Based Spectral-Spatial Feature Learning for Hyperspectral Image Classification

    This paper proposes a novel deep learning framework named the bidirectional-convolutional long short-term memory (Bi-CLSTM) network to automatically learn spectral-spatial features from hyperspectral images (HSIs). In the network, spectral feature extraction is treated as a sequence learning problem, and a recurrent connection operator across the spectral domain is used to address it. Meanwhile, inspired by the widely used convolutional neural network (CNN), a convolution operator across the spatial domain is incorporated into the network to extract spatial features. In addition, to fully capture the spectral information, a bidirectional recurrent connection is proposed. In the classification phase, the learned features are concatenated into a vector and fed to a softmax classifier via a fully-connected operator. To validate the effectiveness of the proposed Bi-CLSTM framework, we compare it with several state-of-the-art methods, including the CNN framework, on three widely used HSIs. The obtained results show that Bi-CLSTM can improve the classification performance compared with other methods.
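The core idea can be sketched in a few lines: the spectral bands of an HSI patch form a sequence, each step applies a convolution over the spatial dimensions, and a recurrence runs both forward and backward across the bands. The sketch below is a minimal, hypothetical illustration (a plain tanh recurrence with a fixed kernel, not the authors' trained Bi-CLSTM; all shapes and weights are illustrative):

```python
# Minimal sketch of the Bi-CLSTM idea: convolve over the spatial dims at
# each spectral step, and recur forward and backward across the bands.
import numpy as np

def conv2d_same(x, k):
    """Naive 'same' 2D convolution of an (H, W) map with a (3, 3) kernel."""
    H, W = x.shape
    xp = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i+3, j:j+3] * k)
    return out

def bi_recurrent_conv(patch, k, alpha=0.5):
    """patch: (B, H, W) spectral cube; returns concatenated fwd/bwd states."""
    B, H, W = patch.shape
    fwd = np.zeros((H, W))
    bwd = np.zeros((H, W))
    for b in range(B):                      # forward pass over the bands
        fwd = np.tanh(conv2d_same(patch[b], k) + alpha * fwd)
    for b in reversed(range(B)):            # backward pass over the bands
        bwd = np.tanh(conv2d_same(patch[b], k) + alpha * bwd)
    # concatenate the two directions into one feature vector
    return np.concatenate([fwd.ravel(), bwd.ravel()])

patch = np.random.default_rng(0).standard_normal((5, 8, 8))  # 5 bands, 8x8 patch
kernel = np.full((3, 3), 1.0 / 9.0)
feat = bi_recurrent_conv(patch, kernel)
print(feat.shape)  # (128,)
```

In the actual Bi-CLSTM, each step is a full convolutional LSTM cell with gates, and the concatenated features feed a fully-connected softmax classifier.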

    Cascaded Recurrent Neural Networks for Hyperspectral Image Classification

    By considering the spectral signature as a sequence, recurrent neural networks (RNNs) have recently been used successfully to learn discriminative features from hyperspectral images (HSIs). However, most of these models simply feed all spectral bands into the RNN directly, which may not fully exploit the specific properties of HSIs. In this paper, we propose a cascaded RNN model using gated recurrent units (GRUs) to explore the redundant and complementary information of HSIs. It mainly consists of two RNN layers. The first RNN layer is used to eliminate redundant information between adjacent spectral bands, while the second RNN layer aims to learn the complementary information from non-adjacent spectral bands. To improve the discriminative ability of the learned features, we design two strategies for the proposed model. Moreover, considering the rich spatial information contained in HSIs, we further extend the proposed model to a spectral-spatial counterpart by incorporating convolutional layers. To test the effectiveness of our proposed models, we conduct experiments on two widely used HSIs. The experimental results show that our proposed models achieve better results than the compared models.
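The two-layer cascade can be summarized as: layer 1 runs within groups of adjacent bands and compresses their redundancy into one summary per group; layer 2 runs across the group summaries to combine complementary, non-adjacent information. A minimal sketch (a plain tanh recurrence in place of the GRUs; group size, hidden size, and weights are illustrative, not the authors' configuration):

```python
# Sketch of the cascaded-RNN idea: summarize adjacent-band groups first,
# then recur over the summaries of non-adjacent groups.
import numpy as np

def rnn_last(seq, W, U):
    """Simple tanh recurrence; returns the final hidden state."""
    h = np.zeros(U.shape[0])
    for x in seq:
        h = np.tanh(W @ np.atleast_1d(x) + U @ h)
    return h

def cascaded_rnn(spectrum, group_size, W1, U1, W2, U2):
    """Layer 1 compresses adjacent-band groups; layer 2 spans the groups."""
    groups = [spectrum[i:i + group_size]
              for i in range(0, len(spectrum), group_size)]
    summaries = [rnn_last(g, W1, U1) for g in groups]  # redundancy within groups
    return rnn_last(summaries, W2, U2)                 # complementarity across groups

rng = np.random.default_rng(1)
spectrum = rng.standard_normal(200)          # 200 spectral bands of one pixel
h = 16
W1, U1 = rng.standard_normal((h, 1)) * 0.1, rng.standard_normal((h, h)) * 0.1
W2, U2 = rng.standard_normal((h, h)) * 0.1, rng.standard_normal((h, h)) * 0.1
feature = cascaded_rnn(spectrum, 10, W1, U1, W2, U2)
print(feature.shape)  # (16,)
```

Replacing `rnn_last` with GRU cells and adding convolutional layers on the input patch yields the spectral-spatial variant described in the abstract.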

    Classification of Hyperspectral and LiDAR Data Using Coupled CNNs

    In this paper, we propose an efficient and effective framework to fuse hyperspectral and Light Detection And Ranging (LiDAR) data using two coupled convolutional neural networks (CNNs). One CNN is designed to learn spectral-spatial features from the hyperspectral data, and the other is used to capture the elevation information from the LiDAR data. Both consist of three convolutional layers, and the last two convolutional layers are coupled via a parameter-sharing strategy. In the fusion phase, feature-level and decision-level fusion methods are used simultaneously to fully integrate these heterogeneous features. For the feature-level fusion, three fusion strategies are evaluated: concatenation, maximization, and summation. For the decision-level fusion, a weighted summation strategy is adopted, where the weights are determined by the classification accuracy of each output. The proposed model is evaluated on an urban data set acquired over Houston, USA, and a rural one captured over Trento, Italy. On the Houston data, our model achieves a new record overall accuracy of 96.03%. On the Trento data, it achieves an overall accuracy of 99.12%. These results clearly demonstrate the effectiveness of our proposed model.

    Comparing the Performance of Neural Network and Deep Convolutional Neural Network in Estimating Soil Moisture from Satellite Observations

    Soil moisture (SM) plays an important role in the hydrological cycle and weather forecasting. Satellites provide the only viable approach to regularly observe large-scale SM dynamics. Conventionally, SM is estimated from satellite observations based on radiative transfer theory. Recent studies have demonstrated that the neural network (NN) method can retrieve SM with accuracy comparable to conventional methods. Here, we are interested in whether an NN model with a more complex structure, namely a deep convolutional neural network (DCNN), can further improve SM retrievals compared with the NN model used in recent studies. To achieve this objective, the same input data are used for the DCNN and NN models: L-band Soil Moisture and Ocean Salinity (SMOS) brightness temperature (TB), C-band Advanced Scatterometer (ASCAT) backscattering coefficients, Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI), and soil temperature. The target SM used to train the DCNN and NN models is the European Center for Medium-range Weather Forecasts Re-Analysis Interim (ERA-Interim) product. The experiment consists of two phases: the learning phase from 1 January to 31 December 2015 and the testing phase from 1 January to 31 December 2016. In the learning phase, we train the DCNN and NN models using the ERA-Interim SM. When evaluating the DCNN and NN models against in situ measurements in the testing phase, we find that the temporal correlations between DCNN SM and in situ measurements are higher than those between NN SM and in situ measurements by 6.2% and 2.5% on ascending and descending orbits, respectively. In addition, in terms of temporal and spatial dynamics, the SM values simulated by DCNN/NN and the ERA-Interim SM agree relatively well at a global scale. Results suggest that both the NN and DCNN models are effective in estimating SM from satellite observations, and that DCNN achieves slightly better performance than NN.
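The headline metric here, the temporal correlation between a retrieved SM time series and in situ measurements at a site, is just the Pearson correlation over time. A minimal sketch with synthetic data (the series below are fabricated for illustration only, not SMOS/ASCAT retrievals):

```python
# Sketch of the evaluation metric: per-site temporal Pearson correlation
# between a retrieved soil-moisture series and in situ measurements.
import numpy as np

def temporal_corr(retrieved, in_situ):
    """Pearson correlation between two SM time series at one site."""
    return np.corrcoef(retrieved, in_situ)[0, 1]

rng = np.random.default_rng(2)
truth = rng.uniform(0.05, 0.45, size=365)             # daily in situ SM (m3/m3)
retrieval = truth + rng.normal(0, 0.03, size=365)     # noisy synthetic retrieval
r = temporal_corr(retrieval, truth)
print(round(r, 2))  # close to 1 for this low-noise synthetic case
```

The reported 6.2% and 2.5% gains are relative improvements in this correlation, averaged over sites, on ascending and descending orbits respectively.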

    Hypergraph Embedding for Spatial-Spectral Joint Feature Extraction in Hyperspectral Images

    The fusion of spatial and spectral information in hyperspectral images (HSIs) is useful for improving classification accuracy. However, this approach usually produces higher-dimensional features, and the curse of dimensionality may arise due to the small ratio between the number of training samples and the feature dimensionality. To ease this problem, we propose a novel algorithm for spatial-spectral feature extraction based on hypergraph embedding. First, each HSI pixel is regarded as a vertex, and the concatenation of extended morphological profiles (EMP) and spectral features is adopted as the feature associated with the vertex. A hypergraph is then constructed by the K-nearest-neighbor method, in which each pixel and its K most relevant pixels are linked as one hyperedge to represent the complex relationships between HSI pixels. Second, the hypergraph embedding model is designed to learn a low-dimensional feature while preserving the geometric structure of the HSI. An adaptive hyperedge weight estimation scheme is also introduced to preserve the prominent hyperedges via a regularization constraint on the weights. Finally, the learned low-dimensional features are fed to a support vector machine (SVM) for classification. Experimental results on three benchmark hyperspectral databases are presented. They highlight the importance of spatial-spectral joint feature embedding for the accurate classification of HSI data, and the adaptive weight estimation further improves the classification accuracy. These results verify the effectiveness of the proposed method.
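The hypergraph construction step can be sketched concisely: for each pixel, a hyperedge links the pixel (its "centroid") with its K nearest neighbors in the joint EMP+spectral feature space, recorded as a column of the vertex-hyperedge incidence matrix H. The sketch below uses random features for illustration (it covers only the KNN construction, not the embedding model or the adaptive weight estimation):

```python
# Sketch of KNN-based hypergraph construction: one hyperedge per pixel,
# linking that pixel and its K nearest neighbors in feature space.
import numpy as np

def knn_hypergraph(features, K):
    """Return the (n x n) incidence matrix H: H[v, j] = 1 iff vertex v
    belongs to hyperedge j (pixel j plus its K nearest neighbors)."""
    n = len(features)
    # pairwise Euclidean distances between all pixel features
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    H = np.zeros((n, n))
    for j in range(n):
        nearest = np.argsort(d[j])[:K + 1]  # includes the centroid pixel itself
        H[nearest, j] = 1.0
    return H

rng = np.random.default_rng(3)
feats = rng.standard_normal((10, 4))  # 10 pixels, 4-dim EMP+spectral features
H = knn_hypergraph(feats, K=3)
print(H.shape)  # (10, 10); every hyperedge links K + 1 = 4 vertices
```

The embedding step then learns a projection that keeps vertices sharing many hyperedges close in the low-dimensional space, with hyperedge weights adapted under the regularization constraint.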

    Deep Encoder-Decoder Networks for Classification of Hyperspectral and LiDAR Data
